COVID-19 has imposed disproportionate mental health consequences on the public at different stages of the pandemic. We use a computational approach to capture the specific aspects that trigger an online community's anxiety about the pandemic, and investigate how these aspects change over time. First, we identified nine sources of anxiety (SOAs) through thematic analysis of a sample of Reddit posts ($n=86$) from r/COVID19_support. We then quantified Reddit users' anxiety by training algorithms on a manually annotated sample ($n=793$) to automatically label the SOAs in a larger chronological sample ($n=6,535$). The nine SOAs align with items in recently developed pandemic anxiety measurement scales. We observed that Reddit users' concerns about health risks remained high during the first eight months of the pandemic and then diminished dramatically, even though case surges occurred later. In general, the intensity of the language users employed to disclose the SOAs weakened as the pandemic progressed. However, concerns about mental health and the future grew steadily throughout the period covered by this study, and people tended to use more intense language to describe mental health concerns than health risk or death concerns. Our results suggest that although COVID-19 gradually weakened as a health threat thanks to appropriate countermeasures, the mental health conditions of this online group did not necessarily improve. Our system lays the groundwork for population health and epidemiology scholars to examine, in a timely manner, the aspects that trigger pandemic anxiety.
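The annotate-then-scale labeling step described above (train on a small hand-labeled sample, then label a much larger chronological sample) can be sketched as follows. The file names, column names, and the TF-IDF + one-vs-rest logistic regression classifier are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the annotate-then-scale labeling step described above.
# File/column names and the model choice are assumptions for demonstration.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

annotated = pd.read_csv("annotated_posts.csv")       # n = 793, hand-labeled
unlabeled = pd.read_csv("chronological_posts.csv")   # n = 6,535

soa_labels = [c for c in annotated.columns if c.startswith("soa_")]  # 9 binary columns

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=3),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(annotated["text"], annotated[soa_labels])

# Label each post in the larger chronological sample with the nine SOAs.
unlabeled[soa_labels] = clf.predict(unlabeled["text"])
```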
Computational humor detection systems rarely model the subjectivity of humor responses, or consider an alternative reaction to humor, namely offense. We analyze a large dataset of humor and offense ratings by male and female annotators of different age groups. We find that women link these two concepts more strongly than men, and that they tend to give lower humor ratings and higher offense scores. We also find that the correlation between humor and offense increases with age. Although there were no gender or age differences in humor detection itself, women and older annotators indicated more often than men that they did not understand the joke texts. We discuss the implications for computational humor detection and downstream tasks.
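The group-wise analysis described above can be sketched as follows; the dataset file and column names ("gender", "age_group", "humor", "offense") are assumptions for illustration, not the paper's actual schema.

```python
# A minimal sketch of the group-wise analysis described above:
# per-group Pearson correlation between humor and offense ratings.
import pandas as pd

ratings = pd.read_csv("humor_offense_ratings.csv")

# Correlation between humor and offense ratings, split by annotator gender.
by_gender = ratings.groupby("gender").apply(
    lambda g: g["humor"].corr(g["offense"])
)

# The same correlation across age groups, to see whether it increases with age.
by_age = ratings.groupby("age_group").apply(
    lambda g: g["humor"].corr(g["offense"])
)

print(by_gender)
print(by_age)
```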
Vision transformers (ViTs) are quickly becoming the de facto architecture for computer vision, yet we understand very little about why they work and what they learn. While existing studies visually analyze the mechanisms of convolutional neural networks, an analogous exploration of ViTs remains challenging. In this paper, we first address the obstacles to performing visualizations on ViTs. Assisted by these solutions, we observe that neurons in ViTs trained with language model supervision (e.g., CLIP) are activated by semantic concepts rather than visual features. We also explore the underlying differences between ViTs and CNNs, and we find that transformers detect image background features, just like their convolutional counterparts, but their predictions depend far less on high-frequency information. On the other hand, both architecture types behave similarly in the way features progress from abstract patterns in early layers to concrete objects in late layers. In addition, we show that ViTs maintain spatial information in all layers except the final layer. In contrast to previous works, we show that the last layer most likely discards the spatial information and behaves as a learned global pooling operation. Finally, we conduct large-scale visualizations on a wide range of ViT variants, including DeiT, CoaT, ConViT, PiT, Swin, and Twin, to validate the effectiveness of our method.
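As a concrete starting point for this kind of analysis, a minimal sketch of capturing per-block ViT activations with forward hooks is shown below (using timm). The model choice and hook placement are illustrative assumptions, not the paper's exact visualization procedure.

```python
# A hedged sketch of one prerequisite for ViT visualization: capturing
# intermediate activations from every transformer block via forward hooks.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on every transformer block to inspect how features
# progress from early (abstract) to late (concrete) layers.
for i, block in enumerate(model.blocks):
    block.register_forward_hook(save_activation(f"block_{i}"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

for name, act in activations.items():
    print(name, tuple(act.shape))  # (1, num_tokens, embed_dim) per block
```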
Objective: We aim to develop an open-source natural language processing (NLP) package, SODA (i.e., SOcial DeterminAnts), with pre-trained transformer models to extract social determinants of health (SDoH) for cancer patients, examine the generalizability of SODA to a new disease domain (i.e., opioid use), and evaluate the extraction rate of SDoH using cancer populations. Methods: We identified SDoH categories and attributes and developed an SDoH corpus using clinical notes from a general cancer cohort. We compared four transformer-based NLP models to extract SDoH, examined the generalizability of the NLP models to a cohort of patients prescribed opioids, and explored customization strategies to improve performance. We applied the best NLP model to extract 19 categories of SDoH from the breast (n=7,971), lung (n=11,804), and colorectal cancer (n=6,240) cohorts. Results and Conclusion: We developed a corpus of 629 cancer patients' notes with annotations of 13,193 SDoH concepts/attributes from 19 categories of SDoH. The Bidirectional Encoder Representations from Transformers (BERT) model achieved the best strict/lenient F1 scores of 0.9216 and 0.9441 for SDoH concept extraction, and 0.9617 and 0.9626 for linking attributes to SDoH concepts. Fine-tuning the NLP models using new annotations from opioid use patients improved the strict/lenient F1 scores from 0.8172/0.8502 to 0.8312/0.8679. The extraction rates among the 19 categories of SDoH varied greatly: 10 SDoH could be extracted from >70% of cancer patients, but 9 SDoH had a low extraction rate (<70% of cancer patients). The SODA package with pre-trained transformer models is publicly available at https://github.com/uf-hobiinformatics-lab/SDoH_SODA.
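A minimal sketch of transformer-based concept extraction of the kind described above is shown below, using the Hugging Face token-classification pipeline. The checkpoint here is a generic public NER model standing in for an SDoH-specific model, and the example note is invented; for the real package and models, see the SODA repository linked above.

```python
# Hedged sketch of transformer-based span extraction, as used for SDoH
# concepts above. The checkpoint is a generic placeholder, not SODA's model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",     # placeholder checkpoint for illustration
    aggregation_strategy="simple",   # merge word pieces into whole spans
)

note = "Patient lives alone, reports prior tobacco use, currently unemployed."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```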
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences, and motivate the development of a shared hyper-spatial modeling language and transaction protocol as a first (and key) step towards such an ecology.
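To make "maximizing (Bayesian) model evidence" concrete, the standard variational bound from the active inference literature can be written as follows; this is a textbook formulation added for clarity, not an equation reproduced from the white paper.

```latex
% Self-evidencing as free-energy minimization: the variational free energy F
% upper-bounds negative log model evidence (surprisal), so minimizing F
% maximizes evidence for the generative model p(o, s | m).
F[q, o] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s \mid m)\big]
        \;\geq\; -\ln p(o \mid m)
```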
To analyze this characteristic of vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was done using a dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r, θ) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes with a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with Dice of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference. When compared to the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
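A hedged sketch of the two-stage segment-then-classify design described above follows: a DeepLabV3+-style network proposes microvessel candidates, and a shallow CNN accepts or rejects each candidate crop. Model choices, input sizes, and the threshold are illustrative assumptions, not the paper's trained configuration.

```python
# Sketch of the two-stage pipeline: segmentation proposes candidates,
# a shallow CNN classifies each bounding-box crop. Illustrative only.
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

segmenter = smp.DeepLabV3Plus(encoder_name="resnet34", in_channels=1, classes=1)

# Shallow CNN classifier for candidate crops (microvessel vs. non-microvessel).
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),
)

polar_frame = torch.randn(1, 1, 256, 256)         # pre-processed (r, theta) image
candidate_mask = segmenter(polar_frame).sigmoid() > 0.5

crop = torch.randn(1, 1, 64, 64)                  # a bounding box around one candidate
is_microvessel = classifier(crop).argmax(dim=1)   # stage-two accept/reject
```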
Causal discovery has become an important tool for scientists and practitioners who wish to uncover causal relationships from observational data. While most prior causal discovery methods implicitly assume that no expert domain knowledge is available, practitioners can often supply such knowledge from prior experience. Recent work has incorporated domain knowledge into constraint-based causal discovery. However, most constraint-based methods assume causal faithfulness, which is frequently violated in practice. This has led to renewed attention on exact-search, score-based causal discovery methods that do not assume causal faithfulness, such as A*-based methods. However, these methods have not been considered in the context of domain knowledge. In this work, we focus on efficiently integrating several types of domain knowledge into A*-based causal discovery. In doing so, we discuss and explain how domain knowledge can reduce the graph search space, and then provide an analysis of the potential computational gains. We support these findings with experiments on synthetic and real data, showing that even small amounts of domain knowledge can significantly speed up A*-based causal discovery and improve its performance and practicality.
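A minimal sketch of the core idea, that expert knowledge shrinks the space of candidate parent sets an exact score-based search must score, is given below. The helper function and toy variables are hypothetical illustrations, not the paper's algorithm.

```python
# Sketch: forbidden and required edges prune the candidate parent sets
# enumerated for each variable in exact score-based search. Illustrative only.
from itertools import combinations

def candidate_parent_sets(variables, target, forbidden, required, max_parents=3):
    """Enumerate parent sets for `target` consistent with domain knowledge.

    forbidden: set of (parent, child) edges the expert rules out.
    required:  set of (parent, child) edges the expert asserts.
    """
    must_have = {p for (p, c) in required if c == target}
    allowed = [
        v for v in variables
        if v != target and (v, target) not in forbidden and v not in must_have
    ]
    for k in range(max_parents - len(must_have) + 1):
        for extra in combinations(allowed, k):
            yield must_have | set(extra)

variables = ["smoking", "tar", "cancer"]
forbidden = {("cancer", "smoking")}       # effect cannot cause exposure
required = {("smoking", "tar")}           # known mechanism

print(list(candidate_parent_sets(variables, "tar", forbidden, required)))
```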
Multiple instance learning (MIL) is widely used in computer-aided interpretation of pathological whole slide images (WSIs) to address the lack of pixel- or patch-level annotations. Typically, such approaches directly apply "natural-image-driven" MIL algorithms that ignore the multi-scale (i.e., pyramidal) nature of WSIs. Off-the-shelf MIL algorithms are usually deployed on a single scale of a WSI (e.g., 20x magnification), whereas human pathologists aggregate global and local patterns in a multi-scale manner (e.g., by zooming across different magnifications). In this study, we propose a novel cross-scale attention mechanism that explicitly aggregates inter-scale interactions in a single MIL network for Crohn's disease (CD), a form of inflammatory bowel disease. The contributions of this paper are two-fold: (1) a cross-scale attention mechanism is proposed to aggregate features from multi-scale interactions at different resolutions; and (2) differential multi-scale attention visualizations are generated to localize explainable lesion patterns. By training on roughly 250,000 H&E-stained ascending colon (AC) patches from 20 CD patients and 30 healthy control samples at different scales, our approach achieved a superior area under the curve (AUC) score of 0.8924 compared with baseline models. The official implementation is publicly available at https://github.com/hrlblab/cs-mil.
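In the spirit of the cross-scale mechanism described above, a minimal sketch of attention-weighted aggregation over per-scale bag embeddings follows. This is an illustrative simplification; the authors' actual implementation is in the CS-MIL repository linked above.

```python
# Sketch: attention over per-scale bag embeddings in a single MIL network.
# A simplification of cross-scale attention, for illustration only.
import torch
import torch.nn as nn

class CrossScaleAttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, num_scales=3):
        super().__init__()
        # One attention score per scale-level bag embedding.
        self.scale_attn = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1)
        )
        self.head = nn.Linear(feat_dim, 1)  # slide-level CD vs. control logit

    def forward(self, bags):
        # bags: list of (num_patches_at_scale, feat_dim) tensors, one per scale.
        scale_embeds = torch.stack([b.mean(dim=0) for b in bags])      # (S, D)
        weights = torch.softmax(self.scale_attn(scale_embeds), dim=0)  # (S, 1)
        slide_embed = (weights * scale_embeds).sum(dim=0)              # (D,)
        return self.head(slide_embed), weights  # prediction + scale attention

model = CrossScaleAttentionMIL()
bags = [torch.randn(n, 512) for n in (40, 120, 300)]  # e.g., 20x/10x/5x patches
logit, attn = model(bags)
```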
Language models demonstrate both quantitative improvements and new qualitative capabilities as their scale increases. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing on linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, though this can be improved with prompting.
Recent progress in medical artificial intelligence (AI) has delivered systems that can reach clinical-expert-level performance. However, such systems tend to demonstrate sub-optimal "out-of-distribution" performance when evaluated in clinical settings that differ from the training environment. A common mitigation strategy is to develop a separate system for each clinical setting using site-specific data [1]. However, this quickly becomes impractical, as medical data is time-consuming to acquire and expensive to annotate [2]. Thus, the problem of "data-efficient generalization" presents an ongoing difficulty for medical AI development. Although progress in representation learning shows promise, its benefits have not been rigorously studied, particularly in out-of-distribution settings. To meet these challenges, we present REMEDIS, a unified representation learning strategy for improving the robustness and data efficiency of medical imaging AI. REMEDIS uses a generic combination of large-scale supervised transfer learning and self-supervised learning, and requires little task-specific customization. We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data. REMEDIS exhibits significantly improved in-distribution performance, with up to an 11.5% relative improvement in diagnostic accuracy over strong baselines. More importantly, our strategy leads to strong data-efficient generalization of medical imaging AI, matching strong supervised baselines while using only 1% to 33% of the retraining data across tasks. These results suggest that REMEDIS can significantly accelerate the life-cycle of medical imaging AI development, representing an important step forward for medical imaging AI to deliver broad impact.
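A hedged sketch of the two-stage recipe described above (large-scale supervised transfer followed by self-supervised, SimCLR-style adaptation on in-domain images) is given below. The backbone, projector, and loss details are generic assumptions, not the REMEDIS code.

```python
# Sketch: supervised-transfer backbone + SimCLR-style contrastive adaptation
# on unlabeled in-domain images. Illustrative assumptions throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Stage 1: supervised transfer -- begin from an ImageNet-pretrained backbone.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = nn.Identity()                  # keep 2048-d features
projector = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 128))

def simclr_loss(z1, z2, temperature=0.1):
    """NT-Xent loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))        # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Stage 2: self-supervised adaptation on unlabeled in-domain medical images
# (random tensors stand in for two augmented views of the same batch).
view1, view2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
loss = simclr_loss(projector(backbone(view1)), projector(backbone(view2)))
loss.backward()
```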